
    Privacy-Aware Processing of Biometric Templates by Means of Secure Two-Party Computation

    The use of biometric data for person identification and access control is gaining popularity. Handling biometric data, however, requires particular care, since such data is indissolubly tied to the identity of its owner, raising important security and privacy issues. This chapter focuses on the latter, presenting an innovative approach that, by relying on tools borrowed from Secure Two-Party Computation (STPC) theory, makes it possible to process biometric data in encrypted form, thus eliminating any risk that private biometric information is leaked during an identification process. The basic concepts behind STPC are reviewed, together with the cryptographic primitives needed to achieve privacy-aware processing of biometric data in an STPC context. The two main approaches proposed so far, namely homomorphic encryption and garbled circuits, are discussed, and the way such techniques can be used to develop a full biometric matching protocol is described. Some general guidelines for the design of a privacy-aware biometric system are given, so as to allow the reader to choose the most appropriate tools for the application at hand.
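    As an illustration of the homomorphic-encryption approach the abstract mentions, the following sketch (an assumption of this summary, not code from the chapter) uses a toy Paillier cryptosystem: its additive homomorphism lets a server accumulate an encrypted score from two encrypted values without ever seeing the plaintexts. The tiny primes are for readability only and offer no security.

```python
import math
import random

# Toy Paillier cryptosystem (illustration only -- NOT secure key sizes).
# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b.

def keygen(p=1117, q=1129):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                          # standard simple choice of generator
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(u) = (u - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)         # random blinding factor
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pk, sk = keygen()
n = pk[0]
# Server side: combine two ciphertexts; the sum is revealed only on decryption.
c = (encrypt(pk, 40) * encrypt(pk, 2)) % (n * n)
print(decrypt(pk, sk, c))              # -> 42
```

    In a privacy-aware matching protocol, the same property lets the server sum encrypted per-feature distance terms supplied by the client, so only the final match score is ever decrypted.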

    Distance Measures for Reduced Ordering Based Vector Filters

    Reduced ordering based vector filters have proved successful in removing long-tailed noise from color images while preserving edges and fine image details. These filters commonly utilize variants of the Minkowski distance to order the color vectors, with the aim of distinguishing between noisy and noise-free vectors. In this paper, we review various alternative distance measures and evaluate their performance on a large and diverse set of images using several effectiveness and efficiency criteria. The results demonstrate that there are, in fact, strong alternatives to the popular Minkowski metrics.
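    A minimal sketch of the reduced-ordering idea described above (the window size and distance order are illustrative assumptions, not values from the paper): each color vector in a filter window is ranked by its aggregate Minkowski distance to the other vectors, and the lowest-ranked vector replaces the window center, which suppresses impulsive outliers.

```python
import numpy as np

def vector_median(window, p=2):
    """Reduced-ordering filter step: return the vector minimizing the
    sum of Minkowski (order-p) distances to all others in the window."""
    vecs = window.reshape(-1, window.shape[-1]).astype(float)
    # pairwise Minkowski distances between all color vectors
    diffs = np.abs(vecs[:, None, :] - vecs[None, :, :])
    dists = (diffs ** p).sum(axis=-1) ** (1.0 / p)
    scores = dists.sum(axis=1)         # aggregate distance = ordering criterion
    return vecs[np.argmin(scores)]

# 3x3 window of RGB vectors with one impulsive (long-tailed) outlier
win = np.full((3, 3, 3), 100.0)
win[1, 1] = [255.0, 0.0, 255.0]        # noisy center pixel
print(vector_median(win))              # -> [100. 100. 100.]
```

    Swapping the exponent `p` (or the distance function entirely) is exactly the degree of freedom the paper's alternative distance measures explore.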

    An Overview of Backdoor Attacks Against Deep Neural Networks and Possible Defences

    Together with impressive advances touching every aspect of our society, AI technology based on Deep Neural Networks (DNN) is bringing increasing security concerns. While attacks operating at test time have monopolised the initial attention of researchers, backdoor attacks, exploiting the possibility of corrupting DNN models by interfering with the training process, represent a further serious threat undermining the dependability of AI techniques. In backdoor attacks, the attacker corrupts the training data to induce an erroneous behaviour at test time. Test-time errors, however, are activated only in the presence of a triggering event. In this way, the corrupted network continues to work as expected for regular inputs, and the malicious behaviour occurs only when the attacker decides to activate the backdoor hidden within the network. Recently, backdoor attacks have become an intense research domain focusing on both the development of new classes of attacks and the proposal of possible countermeasures. The goal of this overview is to review the works published until now, classifying the different types of attacks and defences proposed so far. The classification guiding the analysis is based on the amount of control that the attacker has over the training process, and the capability of the defender to verify the integrity of the data used for training and to monitor the operations of the DNN at training and test time. Hence, the proposed analysis is well suited to highlighting the strengths and weaknesses of both attacks and defences with reference to the application scenarios in which they operate.
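    The data-poisoning step the abstract describes can be sketched as follows (trigger shape, patch position, poisoning rate, and target label are illustrative assumptions, not taken from the overview): the attacker stamps a small trigger patch onto a fraction of the training images and relabels them with the target class, so a model trained on the corrupted set associates the trigger with that class while behaving normally on clean inputs.

```python
import numpy as np

def poison(images, labels, target_label, rate=0.1, seed=0):
    """Stamp a 3x3 white trigger in the bottom-right corner of a random
    subset of images and flip their labels to the attacker's target."""
    rng = np.random.default_rng(seed)
    imgs, labs = images.copy(), labels.copy()
    idx = rng.choice(len(imgs), size=int(rate * len(imgs)), replace=False)
    imgs[idx, -3:, -3:] = 1.0          # the backdoor trigger
    labs[idx] = target_label           # label flipped to the target class
    return imgs, labs, idx

clean_x = np.zeros((100, 28, 28))
clean_y = np.zeros(100, dtype=int)
px, py, idx = poison(clean_x, clean_y, target_label=7, rate=0.1)
print(len(idx), py[idx[0]])            # -> 10 7
```

    Defences that verify training-data integrity look for exactly this kind of anomaly: a cluster of samples sharing a localized pattern and an inconsistent label.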


    Medico-legal Aspects (Aspetti medico-legali)


    Uncomplicated acute appendicitis: is there a place for conservative treatment? (Appendicite aiguë non compliquée : y a-t-il une place pour le traitement conservateur)

    The aim of this study was to evaluate the efficacy of antibiotic therapy alone in the treatment of uncomplicated acute appendicitis. This was a prospective study of 68 patients with radiologically confirmed simple acute appendicitis, treated with amoxicillin combined with clavulanic acid for 10 days. Appendectomy was performed in case of worsening, or in the absence of improvement after 48 hours of treatment. Conservative treatment was effective in 82.35% of cases, with complete resolution of symptoms in 56 patients. The remaining 12 cases (17.65%) underwent appendectomy; the appendicitis was gangrenous in 8 cases and phlegmonous in 4. Five of the 56 patients who responded well to conservative treatment were readmitted and operated on for recurrence (8.9%); two of these had complicated appendicitis. Appendectomy remains the reference treatment for acute appendicitis, but antibiotic therapy can be offered as first-line treatment to patients presenting with uncomplicated acute appendicitis.

    Removal and injection of keypoints for SIFT-based copy-move counter-forensics

    Recent studies exposed the weaknesses of scale-invariant feature transform (SIFT)-based analysis by removing keypoints without significantly deteriorating the visual quality of the counterfeited image. As a consequence, an attacker can leverage such weaknesses to impair, or directly bypass with alarming efficacy, applications that rely on SIFT. In this paper, we further investigate this topic by addressing the dual problem of keypoint removal, i.e., the injection of fake SIFT keypoints into an image whose authentic keypoints have been previously deleted. Our interest stemmed from the consideration that an image with too few keypoints is in itself a clue of counterfeiting, which the forensic analyst can use to reveal the removal attack. Therefore, we analyse five injection tools that reduce the perceptibility of keypoint removal and compare them experimentally. The results are encouraging and show that injection is feasible without triggering subsequent detection at the SIFT matching level. To demonstrate the practical effectiveness of our procedure, we apply the best-performing tool to create a forensically undetectable copy-move forgery, whereby traces of keypoint removal are hidden by means of keypoint injection.

    A Framework for Decision Fusion in Image Forensics Based on Dempster-Shafer Theory of Evidence

    In this work, we present a decision fusion strategy for image forensics. We define a framework that exploits the information provided by available forensic tools to yield a global judgment about the authenticity of an image. Sources of information are modeled and fused using the Dempster-Shafer Theory of Evidence, since this theory handles uncertain answers from tools and lack of knowledge about prior probabilities better than the classical Bayesian approach. The proposed framework permits us to exploit any available information about tool reliability and about the compatibility between the traces the forensic tools look for. The framework is easily extendable: new tools can be added incrementally with little effort. Comparison with logical disjunction- and SVM-based fusion approaches shows an improvement in classification accuracy, particularly when strong generalization capabilities are needed.
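    The core fusion step, Dempster's rule of combination, can be sketched as follows (the two-hypothesis frame and the mass values are illustrative assumptions, not the tools or numbers from the paper). Note how a tool's uncertainty is expressed by assigning mass to the whole frame rather than to a single hypothesis, which is what distinguishes this model from a Bayesian prior:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions defined over
    subsets of a frame of discernment (frozenset -> mass)."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass falling on the empty set
    k = 1.0 - conflict                     # normalisation factor
    return {s: w / k for s, w in combined.items()}

# Two forensic tools judging whether an image is Tampered (T) or
# Authentic (A); mass on the whole frame {T, A} encodes "don't know".
T, A = frozenset("T"), frozenset("A")
tool1 = {T: 0.6, T | A: 0.4}
tool2 = {T: 0.5, A: 0.2, T | A: 0.3}
fused = combine(tool1, tool2)
print(round(fused[T], 3))                  # -> 0.773
```

    The fused mass on T (0.773) exceeds what either tool assigned alone, because the two independent sources corroborate each other after the conflicting mass is normalised away.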